11 research outputs found

    Calibrated imputation of numerical data under linear edit restrictions

    No full text
    A common problem faced by statistical offices is that data may be missing from collected data sets. The typical way to overcome this problem is to impute the missing data. The problem of imputing missing data is complicated by the fact that statistical data often have to satisfy certain edit rules and that values of variables sometimes have to sum up to known totals. Standard imputation methods for numerical data as described in the literature generally do not take such edit rules and totals into account. In this paper we describe algorithms for the imputation of missing numerical data that do take edit restrictions into account and that ensure that sums are calibrated to known totals. The methods impute the missing data sequentially, i.e., the variables with missing values are imputed one by one. To assess the performance of the imputation methods, a simulation study is carried out, as well as an evaluation study based on a real dataset.
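
    The paper's algorithms are not reproduced here, but the following sketch illustrates the sequential idea under simplifying assumptions: missing values are imputed one variable at a time from least-squares regressions on the other variables, cycling until roughly stable. The toy data and all names are hypothetical, and the edit-rule and total constraints of the actual methods are omitted (a calibration step is sketched under the fuller record of this paper below).

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy data: 3 numeric variables with a rough linear relation and ~10%
        # of the values missing at random (np.nan).
        X = rng.normal(size=(200, 3))
        X[:, 2] = X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=200)
        X[rng.random(X.shape) < 0.1] = np.nan

        def sequential_regression_impute(X, n_sweeps=5):
            """Impute the variables one by one from regressions on the others."""
            X = X.copy()
            miss = np.isnan(X)
            # Start from column means so every regression has complete predictors.
            for j in range(X.shape[1]):
                X[miss[:, j], j] = np.nanmean(X[:, j])
            for _ in range(n_sweeps):                  # cycle a few times
                for j in range(X.shape[1]):
                    others = [c for c in range(X.shape[1]) if c != j]
                    A = np.column_stack([np.ones(len(X)), X[:, others]])
                    obs = ~miss[:, j]
                    beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
                    X[miss[:, j], j] = A[miss[:, j]] @ beta   # regression prediction
            return X

        X_imputed = sequential_regression_impute(X)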

    New Results on the Jagodzinski-Langeheine Debate in ZA-Information 1987

    Full text link
    In the two 1987 issues of ZA-Information, Jagodzinski and Langeheine debated whether chi-square based test statistics are meaningful for assessing the goodness of fit of a series of latent class models fitted to the sparse 3x3x3 table of the postmaterialism panel. This paper re-evaluates the models at the centre of that debate using non-naive, parametric bootstrap procedures. (author's abstract)
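
    As a rough illustration of the non-naive parametric bootstrap, the sketch below computes a bootstrap p-value for the Pearson chi-square statistic on a sparse 3x3x3 table. A simple independence model stands in for the latent class models of the debate (fitting those requires an EM routine beyond this sketch); "non-naive" means the model is refitted to every bootstrap sample. All data and names are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)

        def fit_independence(table):
            """Stand-in for the latent class fit: independence model, 3-way table."""
            n = table.sum()
            p0 = table.sum(axis=(1, 2)) / n
            p1 = table.sum(axis=(0, 2)) / n
            p2 = table.sum(axis=(0, 1)) / n
            p = np.einsum('i,j,k->ijk', p0, p1, p2)   # fitted cell probabilities
            return p / p.sum()                        # normalise away float error

        def chi2_stat(table, probs):
            expected = np.maximum(table.sum() * probs, 1e-12)   # guard empty cells
            return np.sum((table - expected) ** 2 / expected)

        # Hypothetical sparse 3x3x3 observed table (many small or empty cells).
        observed = rng.poisson(lam=1.5, size=(3, 3, 3)).astype(float)
        n = int(observed.sum())

        probs_hat = fit_independence(observed)
        t_obs = chi2_stat(observed, probs_hat)

        # Non-naive parametric bootstrap: simulate from the fitted model and
        # refit the model to each sample before computing the statistic.
        B = 999
        t_boot = np.empty(B)
        for i in range(B):
            sample = rng.multinomial(n, probs_hat.ravel())
            sample = sample.reshape(3, 3, 3).astype(float)
            t_boot[i] = chi2_stat(sample, fit_independence(sample))

        print((1 + np.sum(t_boot >= t_obs)) / (B + 1))   # bootstrap p-value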

    Optimal adjustments for inconsistency in imputed data

    No full text
    Imputed micro data often contain conflicting information. The situation may arise, e.g., from partial imputation, where one part of the imputed record consists of the observed values of the original record and the other part of the imputed values. Edit rules that involve variables from both parts of the record will often be violated. Alternatively, inconsistency may be caused by adjustments for errors in the observed data, also referred to as imputation in editing. Under the assumption that the remaining inconsistency is not due to systematic errors, we propose to adjust the micro data such that all constraints are simultaneously satisfied and the adjustments are minimal according to a chosen distance metric. Different choices of distance metric are considered, as well as several extensions of the basic situation, including the treatment of categorical data, unit imputation and macro-level benchmarking. The properties and interpretations of the proposed methods are illustrated using business-economic data.
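
    For the basic situation with equality constraints and a (weighted) least-squares metric, the minimal adjustment has a closed form via Lagrange multipliers; the sketch below, with hypothetical names and data, illustrates it. Inequality edits, categorical data and the other extensions discussed in the paper require a proper quadratic-programming solver instead.

        import numpy as np

        def minimal_adjustment(x0, A, b, w=None):
            """Smallest weighted-least-squares change to x0 such that A @ x = b.

            Closed form via Lagrange multipliers:
                x = x0 + W^-1 A' (A W^-1 A')^-1 (b - A x0),
            where W is a diagonal weight matrix (heavier weight = adjust less).
            """
            x0 = np.asarray(x0, dtype=float)
            w = np.ones_like(x0) if w is None else np.asarray(w, dtype=float)
            W_inv = np.diag(1.0 / w)
            K = W_inv @ A.T @ np.linalg.inv(A @ W_inv @ A.T)
            return x0 + K @ (b - A @ x0)

        # Hypothetical imputed record violating the edit profit + costs = turnover.
        # Variables: [turnover, profit, costs].
        x0 = np.array([100.0, 30.0, 60.0])       # 30 + 60 != 100
        A = np.array([[1.0, -1.0, -1.0]])        # turnover - profit - costs = 0
        b = np.array([0.0])
        print(minimal_adjustment(x0, A, b))      # -> approx. [96.67, 33.33, 63.33]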

    Macro-Integration for Solving Large Data Reconciliation Problems

    No full text
    Macro-integration is a well-established technique for the reconciliation of large, high-dimensional tables, applied especially to macroeconomic data at national statistical offices (NSOs). The technique is mainly used when data obtained from different sources have to be reconciled at a macro level. New areas of application arise as new data sources become available to NSOs. Often these new data sources cannot be combined at a micro level, whereas macro-integration could provide a solution to such problems. Yet more research is needed to investigate whether macro-integration can indeed be applied in such situations. In this paper we propose two applications of macro-integration techniques in domains other than the traditional macroeconomic ones: the reconciliation of the tables of a virtual census, and the reconciliation of monthly series of short-term statistics figures with the quarterly figures of structural business statistics.
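
    One concrete macro-level reconciliation device, used here purely as an illustration with hypothetical figures, is iterative proportional fitting (RAS): a preliminary table is rescaled row- and column-wise until it reproduces known margins, for instance census-table margins taken from an administrative source.

        import numpy as np

        def ipf(table, row_totals, col_totals, n_iter=100, tol=1e-10):
            """Iterative proportional fitting (RAS): rescale rows and columns in
            turn until the table reproduces both sets of known margins."""
            T = np.asarray(table, dtype=float).copy()
            for _ in range(n_iter):
                T *= (row_totals / T.sum(axis=1))[:, None]   # match row margins
                T *= (col_totals / T.sum(axis=0))[None, :]   # match column margins
                if np.allclose(T.sum(axis=1), row_totals, atol=tol):
                    break
            return T

        # Hypothetical preliminary table vs. margins from an administrative source
        # (margins must share the same grand total for RAS to converge).
        initial = np.array([[40.0, 30.0], [20.0, 10.0]])
        rows = np.array([75.0, 25.0])       # known row totals
        cols = np.array([55.0, 45.0])       # known column totals
        print(ipf(initial, rows, cols))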

    Calibrated imputation of numerical data under linear edit restrictions

    No full text
    A common problem faced by statistical institutes is that data may be missing from collected datasets. The typical way to overcome this problem is to impute the missing data. The problem of imputing missing data is complicated by the fact that statistical data often have to satisfy certain edit rules and that values of variables across units sometimes have to sum up to known totals. For numerical data, edit rules are most often formulated as linear restrictions on the variables. For example, for data on enterprises, edit rules could state that the profit and costs of an enterprise should sum up to its turnover and that the turnover should be at least zero. The totals of some variables across units may already be known from administrative data (e.g. turnover from a tax register) or estimated from other sources. Standard imputation methods for numerical data as described in the literature generally do not take such edit rules and totals into account. In this article we describe algorithms for imputing missing numerical data that take edit restrictions into account and ensure that sums are calibrated to known totals. These algorithms are based on a sequential regression approach that uses regression predictions to impute the variables one by one. To assess the performance of the imputation methods, a simulation study is carried out, as well as an evaluation study based on a real dataset.
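
    The calibration aspect can be illustrated with a deliberately simple stand-in for the paper's sequential regression algorithms: after imputation, only the imputed entries of a variable are rescaled pro rata so that the column reproduces a total known from a register, which also preserves nonnegativity. All figures and names below are hypothetical.

        import numpy as np

        def calibrate_to_total(values, imputed, known_total):
            """Rescale only the imputed entries so the column sums to the known
            total; pro-rata scaling keeps nonnegative imputations nonnegative.
            Assumes the imputed entries have a positive sum."""
            values = np.asarray(values, dtype=float).copy()
            observed_sum = values[~imputed].sum()
            values[imputed] *= (known_total - observed_sum) / values[imputed].sum()
            return values

        # Hypothetical turnover column: 2 of 5 units imputed, total known from a
        # tax register.  Observed values are left untouched.
        turnover = np.array([120.0, 80.0, 95.0, 100.0, 110.0])
        imputed = np.array([False, False, False, True, True])
        print(calibrate_to_total(turnover, imputed, known_total=520.0))
        # -> [120. 80. 95. 107.14... 117.85...]; the column now sums to 520.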

    Uncertainty measures for economic accounts

    No full text
    The problem of adjusting economic or social accounts can be quite complex when large systems of accounting equations are considered, especially when the accounts must fulfill predefined, known functional relationships. For such complex systems, evaluating the accuracy of the estimates after adjustment is difficult, since they are determined by the unadjusted initial estimates, the accounting equations and the adjustment method. In this paper we consider such systems as a single entity and develop scalar uncertainty measures that capture the adjustment effect as well as the relative contribution of the various input estimates to the final estimated account. The scalar measures are based on the first two moments of the joint distribution of the underlying true accounting system, without requiring a full specification of that distribution. Scalar measures can help to communicate effectively to users the relevant uncertainty of disseminated macroeconomic accounts, and can assist the producer in choosing and improving the adjustment method and the input estimators. The proposed approach is illustrated both analytically and by simulation, with applications to supply and use tables and to time series data.
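
    As a sketch of the moment-based idea (not the paper's actual measures), the code below treats the adjustment as the linear map of a constrained least-squares step, propagates an assumed input covariance through it, and reports scalar summaries such as the total variance before and after adjustment. The account, constraint and covariance are all hypothetical.

        import numpy as np

        def adjusted_covariance(Sigma, A):
            """Covariance of the adjusted estimates when x_adj = x0 + K (b - A x0)
            with K = Sigma A' (A Sigma A')^-1 and the totals b treated as fixed."""
            K = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T)
            P = np.eye(len(Sigma)) - K @ A        # linear map from x0 to x_adj
            return P @ Sigma @ P.T

        # Hypothetical 3-component account with one accounting identity.
        Sigma0 = np.diag([4.0, 1.0, 2.25])        # variances of the input estimates
        A = np.ones((1, 3))                       # constraint: x1 + x2 + x3 = b
        Sigma_adj = adjusted_covariance(Sigma0, A)

        # Scalar uncertainty measures: total variance before and after adjustment,
        # and the relative contribution of each component after adjustment.
        print(np.trace(Sigma0), np.trace(Sigma_adj))
        print(np.diag(Sigma_adj) / np.trace(Sigma_adj))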

    Predictive mean matching imputation of semicontinuous variables

    No full text
    Multiple imputation methods properly account for the uncertainty of missing data. One such method for creating multiple imputations is predictive mean matching (PMM), a general-purpose method. Little is known about the performance of PMM in imputing non-normal semicontinuous data (skewed data with a point mass at a certain value and an otherwise continuous distribution). We investigate the performance of PMM as well as of dedicated methods for imputing semicontinuous data by performing simulation studies under univariate and multivariate missingness mechanisms, and we also investigate the performance on real-life datasets. We conclude that PMM performs at least as well as the investigated dedicated methods for imputing semicontinuous data and that it is the only method that yields plausible imputations and preserves the original data distributions.
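
    A minimal single-imputation version of PMM, under simplifying assumptions and with hypothetical data, is sketched below: regress the outcome on the covariates, then replace each missing value with the observed value of a donor whose regression prediction is among the k closest. Proper multiple imputation additionally draws the regression parameters from their posterior and repeats the procedure M times.

        import numpy as np

        rng = np.random.default_rng(2)

        def pmm_impute(y, X, k=5):
            """Single-imputation PMM: regress y on X, then replace each missing
            y with the observed y of a donor drawn from the k cases whose
            regression predictions are closest to the prediction for the gap."""
            obs = ~np.isnan(y)
            A = np.column_stack([np.ones(len(X)), X])
            beta, *_ = np.linalg.lstsq(A[obs], y[obs], rcond=None)
            pred = A @ beta
            y_imp = y.copy()
            for i in np.where(~obs)[0]:
                donors = np.argsort(np.abs(pred[obs] - pred[i]))[:k]
                y_imp[i] = y[obs][rng.choice(donors)]   # borrow an observed value
            return y_imp

        # Semicontinuous toy outcome: point mass at zero, otherwise right-skewed.
        X = rng.normal(size=300)
        y = np.where(rng.random(300) < 0.4, 0.0,
                     np.exp(0.5 * X + rng.normal(size=300)))
        y[rng.random(300) < 0.2] = np.nan               # missing at random
        y_completed = pmm_impute(y, X)

    Because the donors are actual observed values, the imputations automatically reproduce the point mass and the skewness of a semicontinuous variable, which is consistent with the conclusion above.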

    6-Mercaptopurine Inhibits Atherosclerosis in Apolipoprotein E*3-Leiden Transgenic Mice Through Atheroprotective Actions on Monocytes and Macrophages

    No full text
    Objective: 6-Mercaptopurine (6-MP), the active metabolite of the immunosuppressive prodrug azathioprine, is commonly used in autoimmune diseases and in transplant recipients, who are at high risk for cardiovascular disease. Here, we aimed to gain knowledge of the action of 6-MP in atherosclerosis, with a focus on monocytes and macrophages. Methods and Results: We demonstrate that 6-MP induces apoptosis of THP-1 monocytes, involving decreased expression of the intrinsic antiapoptotic factors B-cell CLL/Lymphoma-2 (Bcl-2) and Bcl2-like 1 (Bcl-xL). In addition, we show that 6-MP decreases expression of the monocyte adhesion molecules platelet endothelial cell adhesion molecule-1 (PECAM-1) and very late antigen-4 (VLA-4) and inhibits monocyte adhesion. Screening of a panel of cytokines relevant to atherosclerosis revealed that 6-MP robustly inhibits monocyte chemoattractant protein-1 (MCP-1) expression in macrophages stimulated with lipopolysaccharide (LPS). Finally, local delivery of 6-MP to the vessel wall, using a drug-eluting cuff, attenuates atherosclerosis in hypercholesterolemic apolipoprotein E*3-Leiden transgenic mice (P<0.05). In line with our in vitro data, this inhibition of atherosclerosis by 6-MP was accompanied by decreased lesion MCP-1 levels, enhanced vascular apoptosis, and reduced macrophage content. Conclusion: We report novel, previously unrecognized atheroprotective actions of 6-MP in cultured monocytes/macrophages and in a mouse model of atherosclerosis, providing further insight into the effect of the immunosuppressive drug azathioprine in atherosclerosis. (Arterioscler Thromb Vasc Biol. 2010;30:1591-1597.)